A Bayesian regards the coefficients \(\alpha^{true} , \beta^{true}\) as realized values of the random variables \(\alpha,\beta\), that is, as values "realized from a probability distribution".
Since these values are of course unknown to anyone, the distribution that \(\alpha,\beta\) follow is computed from a prior distribution and a likelihood (computed from the data), revealing which values of the random variables \(\alpha,\beta\) are "likely to be realized".
The distribution obtained by this computation is generally called the posterior distribution.
Note that a distribution is assumed for \(\epsilon_i\) as well. Since we also want to infer its standard deviation \(\sigma^{true}\), it too is treated as the realized value of a random variable \(\sigma\).
The formula follows directly from Bayes' theorem.
(Single parameter) The posterior density of the random variable \(\beta\) at a particular value \(\beta^{*}\) is
\[ \begin{eqnarray} f(\,\beta=\beta^{*}\,|\,Data\,) &=& \frac{ \,f(\,Data\,|\,\beta=\beta^{*}\,) \cdot f(\,\beta=\beta^{*}\,)}{f(Data)}\\ \\ &=& \frac{f(\,Data\,|\,\beta=\beta^{*}\,) \cdot f(\,\beta=\beta^{*}\,)}{\int_{-\infty}^{\infty} \,f(\,Data\,|\,\beta=\beta^{real}\,) \cdot f(\,\beta=\beta^{real}\,)\,\,d\beta^{real}} \end{eqnarray} \]
where
\(f(\,\beta=\beta^{*}\,)\) … prior density (the density of the random variable \(\beta\) at the particular value \(\beta^{*}\) before the data are obtained)
\(f(\,Data\,|\,\beta=\beta^{*}\,)\) … likelihood (the probability (density) of obtaining the data at hand when the random variable \(\beta\) takes the particular value \(\beta^{*}\))
\(\int_{-\infty}^{\infty} \,f(\,Data\,|\,\beta=\beta^{real}\,)\cdot f(\,\beta=\beta^{real}\,)\,\,d\beta^{real}\) … the probability (density) of obtaining the data at hand: over every realizable value \(\beta^{real}\) of the random variable \(\beta\), it sums the probability (density) that the value is realized under the prior and that, given that value, the data at hand are then obtained
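The denominator (the normalizing constant) can be made concrete with a tiny grid approximation. The following is an illustrative sketch only, with made-up data (three points, a slope-only model with known error sd 1, and a Normal(0, 10²) prior); it is not part of the analysis below:

```r
# Grid approximation of Bayes' theorem for a single parameter beta.
# Hypothetical toy model: y_i = beta * x_i + eps_i, eps_i ~ Normal(0, 1).
set.seed(1)
x <- c(1, 2, 3)
y <- 2 * x + rnorm(3, 0, 1)           # true beta = 2

h         <- 0.01
beta_grid <- seq(-5, 5, by = h)       # candidate values beta^real
prior     <- dnorm(beta_grid, 0, 10)  # prior density f(beta)
lik       <- sapply(beta_grid, function(b)
  prod(dnorm(y - b * x, mean = 0, sd = 1)))  # likelihood f(Data | beta)

evidence  <- sum(lik * prior) * h     # Riemann sum for the denominator f(Data)
posterior <- lik * prior / evidence   # posterior density on the grid

sum(posterior) * h                    # integrates to 1
beta_grid[which.max(posterior)]       # posterior mode, near the true beta
```

The Riemann sum over the grid plays the role of the integral \(\int f(\,Data\,|\,\beta=\beta^{real}\,) f(\,\beta=\beta^{real}\,)\,d\beta^{real}\).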
(Multiple parameters) The posterior (joint) density of the random variables \(\alpha,\beta,\sigma\) at particular values \(\alpha^{*},\beta^{*},\sigma^{*}\) is
\[
\begin{eqnarray}
f(\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,|\,Data\,) &=& \frac{ \,f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,) \cdot f(\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,)}{f(Data)}\\
\\
&=& \frac{f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,) \cdot f(\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,)}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \,f(\,Data\,|\,\alpha=\alpha^{real},\beta=\beta^{real},\sigma=\sigma^{real}\,) \cdot f(\,\alpha=\alpha^{real},\beta=\beta^{real},\sigma=\sigma^{real}\,)\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real}}
\end{eqnarray}
\]
First, consider the likelihood.
The likelihood is (something like) the probability of obtaining the data at hand under a given set of parameters. When a single data point \((x_1,y_1)\) has been obtained, the probability of obtaining it under \(\alpha = \alpha^{*},\beta=\beta^{*},\sigma = \sigma^{*}\) follows from \[
\begin{eqnarray}
y_1 = \alpha^{*} + \beta^{*} x_1 + \epsilon_1 \quad,\quad \epsilon_1 \sim Normal\,(0,{\sigma^{*}}^2)
\end{eqnarray}
\] so \[
\begin{eqnarray}
\epsilon_1 =\,\, &y_1 - (\alpha^{*} + \beta^{*} x_1)& \quad,\quad \epsilon_1 \sim Normal\,(0,{\sigma^{*}}^2)\\
\\
&y_1 - (\alpha^{*} + \beta^{*} x_1)& \,\sim Normal\,(0,{\sigma^{*}}^2)\\
\end{eqnarray}
\] hence \[
\begin{eqnarray}
f((x_1,y_1)\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}) = \frac{1}{\sqrt{2\pi}\sigma^{*}} \, \exp\left[-\frac{\{y_1-(\alpha^{*} + \beta^{*} x_1) -0\}^2}{2{\sigma^{*}}^2}\right]
\end{eqnarray}
\]
This is the "likelihood". (Varying \(\alpha^{*},\beta^{*},\sigma^{*}\) yields the likelihood function.)
Similarly, when \(N\) independent data points \((x_1,y_1),(x_2,y_2),\,...\, ,(x_N,y_N)\) have been obtained, the probability of obtaining them under \(\alpha = \alpha^{*},\beta=\beta^{*},\sigma = \sigma^{*}\) is
\[
\begin{eqnarray}
f((x_1,y_1),(x_2,y_2),\,...\, ,(x_N,y_N)\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}) &=& f((x_1,y_1)\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*})\cdot f((x_2,y_2)\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*})\cdot \,\, ... \,\\
\\
&=& \prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma^{*}} \, \exp\left[-\frac{\{y_i-(\alpha^{*} + \beta^{*} x_i)-0\}^2}{2{\sigma^{*}}^2}\right]
\end{eqnarray}
\]
With \(N\) data points, this is the "likelihood".
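One practical aside (not from the original text): for large \(N\) this product of small densities underflows double precision quickly, so in code the likelihood is usually evaluated on the log scale. A self-contained sketch, using the same data-generating model as the simulation below:

```r
# Log-likelihood avoids the numerical underflow of prod(dnorm(...)).
set.seed(5)
x_obs <- runif(100, -10, 10)
y_obs <- 6 + 2 * x_obs + rnorm(100, 0, 3)

loglik <- function(alpha, beta, sigma) {
  sum(dnorm(y_obs - alpha - beta * x_obs, mean = 0, sd = sigma, log = TRUE))
}

loglik(6, 2, 3)        # a moderate negative number
exp(loglik(6, 2, 3))   # the raw likelihood itself is astronomically small
```

On the log scale, products become sums, so no precision is lost even for thousands of data points.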
Next, consider the "prior distribution" (prior density function).
In principle, the "prior distribution" should be set to the distribution that \(\alpha,\beta\) are believed to follow before the analysis.
Its choice, however, requires care.
If a prior that meshes poorly with the likelihood is chosen, the posterior-density expression \(f(\,\beta=\beta^{*}\,|\,Data\,)= \frac{f(\,Data\,|\,\beta=\beta^{*}\,) \cdot f(\,\beta=\beta^{*}\,)}{\int_{-\infty}^{\infty} \,f(\,Data\,|\,\beta=\beta^{real}\,) \cdot f(\,\beta=\beta^{real}\,)\,\,d\beta^{real}}\) becomes too complicated and the integral in the denominator cannot be put in closed form; MCMC is then required.
According to Toyoda (ed., 2015), there are two approaches to the choice.
① "Natural conjugate prior distribution (conjugate prior)"
This approach prioritizes ease of computing the posterior. The posterior then belongs to a "known" family, so its mean, mode, and so on can be obtained immediately without MCMC.
(Though now that MCMC, HMC in particular, is easy to run, one could argue there is no longer any need to force a tractable posterior.)
| Likelihood | Conjugate prior | Posterior |
|---|---|---|
| Normal (mean) | Normal | Normal |
| Normal (variance) | Inverse-gamma | Inverse-gamma |
| Bernoulli | Beta | Beta |
| Binomial | Beta | Beta |
| Poisson | Gamma | Gamma |
In the present case → the likelihood models the mean of a normal distribution, so a normal prior?
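As a sketch of the first row of the table (normal likelihood for the mean with known variance, normal prior), the standard closed-form update is shown below; the numbers are made up for illustration and are not from the analysis in this document:

```r
# Conjugate update: Normal likelihood (known sigma) x Normal prior on the mean
# -> Normal posterior, with no MCMC needed. Illustrative numbers only.
sigma <- 3                      # known error sd
mu0   <- 0                      # prior mean of mu
tau0  <- 10                     # prior sd of mu (a weak prior)

set.seed(2)
y <- rnorm(50, mean = 6, sd = sigma)
n <- length(y)

post_prec <- 1 / tau0^2 + n / sigma^2                      # precisions add
post_var  <- 1 / post_prec
post_mean <- post_var * (mu0 / tau0^2 + sum(y) / sigma^2)  # precision-weighted mean

c(post_mean, sqrt(post_var))    # the posterior is Normal(post_mean, post_var)
```

Because the prior here is weak, the posterior mean lands very close to the sample mean; a tighter prior would pull it toward `mu0`.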
② "Non-informative prior distribution"
Approach ① has been criticized as arbitrary in that the prior is chosen "so that the posterior is easy to obtain". Box and Tiao (1973) proposed instead choosing a prior that influences the posterior as little as possible.
Example: the uniform distribution on \((-\infty,\infty)\)
・What kind of distribution is this?
In general, the density function of \(\beta\) following a uniform distribution on \((a,b)\) is
\[ f(\beta=\beta^{*}) = \frac{1}{b-a} = \, C \, (\text{a constant}) \]
Letting \(a \to -\infty, \, b \to \infty\) gives
\[
f(\beta = \beta^{*}) = \,C\, = \lim_{a\to -\infty, b\to \infty}\frac{1}{b-a} = \,0
\]
so the density becomes \(0\) everywhere. (→ an improper distribution)
If the prior density is \(0\) everywhere, the posterior density is \(0\) everywhere as well (since the numerator of the posterior density = likelihood × prior density)
→ the information in the likelihood is lost
There are two ways to deal with this:
Regard \(C\) not as \(0\) but as a very, very small value \(\delta\).
Use a uniform distribution on \((a,b)\) with a sufficiently small \(a\) and a sufficiently large \(b\). (e.g. \(a=-10000,\, b=10000,\, C=\frac{1}{20000}\))
Note that the former does not integrate to 1 (the integral diverges), though the procedure still works; the latter is considered the mathematically sounder option.
Either way, using as the prior a uniform distribution whose interval covers the parameter's entire domain, the posterior density function (suppressing the random variable \(\sigma\)) is
\[
\begin{eqnarray}
f(\,\alpha=\alpha^{*},\beta=\beta^{*}\,|\,Data\,) &=& \frac{ \,f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*}\,) \cdot f(\,\alpha=\alpha^{*},\beta=\beta^{*}\,)}{f(Data)}\\
\\
&=& \frac{f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*}\,) \cdot f(\,\alpha=\alpha^{*},\beta=\beta^{*}\,)}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \,f(\,Data\,|\,\alpha=\alpha^{real},\beta=\beta^{real}\,) \cdot f(\,\alpha=\alpha^{real},\beta=\beta^{real}\,)\,\,d\alpha^{real}\,d\beta^{real}} \\
\\
&=& \frac{f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*}\,) \cdot C}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \,f(\,Data\,|\,\alpha=\alpha^{real},\beta=\beta^{real}\,) \cdot C\,\,d\alpha^{real}\,d\beta^{real}}\\
\\
&=& \frac{C \cdot f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*}\,)}{C \cdot \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \,f(\,Data\,|\,\alpha=\alpha^{real},\beta=\beta^{real}\,)\,d\alpha^{real}\,d\beta^{real}}\\
\\
&=& \frac{f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*}\,)}{ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \,f(\,Data\,|\,\alpha=\alpha^{real},\beta=\beta^{real}\,)\,d\alpha^{real}\,d\beta^{real}}\\
\\
&\propto& f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*}\,)
\end{eqnarray}
\]
which is exactly proportional to the likelihood. (\(\therefore\) kernel = likelihood)
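The cancellation of the constant \(C\) can also be checked numerically: on a grid, the normalized posterior is identical for any positive constant prior density. A sketch with simulated data, conditioning on \(\alpha=6,\ \sigma=3\) for simplicity (fixing these values is an assumption of the sketch, not of the text):

```r
# Under a flat prior, the normalized posterior does not depend on C.
set.seed(5)
x_obs <- runif(100, -10, 10)
y_obs <- 6 + 2 * x_obs + rnorm(100, 0, 3)

h         <- 0.01
beta_grid <- seq(1.5, 2.5, by = h)
lik <- sapply(beta_grid, function(b)
  prod(dnorm(y_obs - 6 - b * x_obs, mean = 0, sd = 3)))  # alpha, sigma fixed

posterior_with <- function(C) {
  num <- lik * C               # likelihood x constant prior density
  num / (sum(num) * h)         # normalize: C cancels out
}

all.equal(posterior_with(1), posterior_with(1e-12))  # TRUE
```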
Toyoda (ed., 2015) holds that for "public" analyses, such low-information priors should be used unless there is a widely accepted prior belief. (Besides the uniform distribution: e.g. the Cauchy and half-Cauchy distributions.)
Note that the prior here is a trivariate distribution.
Random variables \(\alpha,\beta\) … uniform on \((a,b)\) (with \(a\) very small and \(b\) very large)
Random variable \(\sigma\) … uniform on \((0,b)\) (with \(b\) very large)
→ within this range the density is constant everywhere at \(C=\frac{1}{(b-a)^2b}\)
\(\left(\text{Derivation of }\, C=\frac{1}{(b-a)^2b}\right)\) \[
\begin{eqnarray}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\,\alpha=\alpha^{real},\beta=\beta^{real},\sigma=\sigma^{real}\,)\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real} &=& 1\\
\\
\int_{0}^{b}\int_{a}^{b}\int_{a}^{b} C\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real} &=& 1\\
(\because \,\, f(\,\alpha=\alpha^{real},\beta=\beta^{real},\sigma=\sigma^{real}\,) &=& C \,(\text{constant}))\\
\\
C \cdot \int_{0}^{b}\int_{a}^{b}\int_{a}^{b} 1\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real} &=& 1\\
\\
C \cdot \int_{0}^{b}\int_{a}^{b} [\alpha^{real}]_a^b \,\,d\beta^{real}\,d\sigma^{real} &=& 1\\
\\
C \cdot \int_{0}^{b}\int_{a}^{b} (b-a) \,\,d\beta^{real}\,d\sigma^{real} &=& 1\\
\\
C \cdot (b-a)\int_{0}^{b} [\beta^{real}]_a^b \,\,d\sigma^{real} &=& 1\\
\\
C \cdot (b-a)\int_{0}^{b} (b-a) \,\,d\sigma^{real} &=& 1\\
\\
C \cdot (b-a)^2 \,\, [\sigma^{real}]_0^b &=& 1\\
\\
C \cdot (b-a)^2b &=& 1\\
\\
C &=& \frac{1}{(b-a)^2b}
\end{eqnarray}
\]
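A quick numerical sanity check of \(C=\frac{1}{(b-a)^2b}\): with the constant density, a Riemann sum over the box \((a,b)\times(a,b)\times(0,b)\) should return 1. Small, made-up bounds are used here so the run is fast:

```r
# Check that the constant density C = 1/((b-a)^2 * b) integrates to 1 over
# alpha, beta in (a, b) and sigma in (0, b). Toy bounds for speed.
a <- -2; b <- 3
C <- 1 / ((b - a)^2 * b)

h          <- 0.05
grid_ab    <- seq(a, b - h, by = h)   # alpha / beta grid
grid_sigma <- seq(0, b - h, by = h)   # sigma grid
total <- C * length(grid_ab)^2 * length(grid_sigma) * h^3

total   # ~ 1
```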
When \(N\) independent data points \((x_1,y_1),(x_2,y_2),\,...\, ,(x_N,y_N)\) have been obtained, the kernel is
\[
\begin{eqnarray}
f(\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,|\,Data\,)
&\propto& \,f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,) \cdot f(\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,)\\
\\
&\fallingdotseq& f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,) \cdot C \\
\\
&\propto& \,f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,)\\
\\
&=& f((x_1,y_1),(x_2,y_2),\,...\, ,(x_N,y_N)\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*})\\
\\
&=& \prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma^{*}} \, \exp\left[-\frac{\{y_i-(\alpha^{*} + \beta^{*} x_i)-0\}^2}{2{\sigma^{*}}^2}\right]\\
\\
&=& \prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma^{*}} \, \exp\left[-\frac{(y_i - \alpha^{*} - \beta^{*} x_i)^2}{2{\sigma^{*}}^2}\right]\\
\\
\end{eqnarray}
\]
(For reference) The posterior density function is
\[
\begin{eqnarray}
f(\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,|\,Data\,)
&=& \frac{f(\,Data\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,) \cdot f(\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,)}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \,f(\,Data\,|\,\alpha=\alpha^{real},\beta=\beta^{real},\sigma=\sigma^{real}\,) \cdot f(\,\alpha=\alpha^{real},\beta=\beta^{real},\sigma=\sigma^{real}\,)\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real}}\\
\\
&=& \frac{f(\,(x_1,y_1),(x_2,y_2),\,...\, ,(x_N,y_N)\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,) \cdot C}{C \cdot \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \,f(\,(x_1,y_1),(x_2,y_2),\,...\, ,(x_N,y_N)\,|\,\alpha=\alpha^{real},\beta=\beta^{real},\sigma=\sigma^{real}\,) \,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real}}\\
\\
&=& \frac{f(\,(x_1,y_1),(x_2,y_2),\,...\, ,(x_N,y_N)\,|\,\alpha=\alpha^{*},\beta=\beta^{*},\sigma=\sigma^{*}\,) }{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \,f(\,(x_1,y_1),(x_2,y_2),\,...\, ,(x_N,y_N)\,|\,\alpha=\alpha^{real},\beta=\beta^{real},\sigma=\sigma^{real}\,)\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real}}\\
\\
&=& \frac{\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma^{*}} \, \exp\left[-\frac{(y_i - \alpha^{*} - \beta^{*} x_i)^2}{2{\sigma^{*}}^2}\right]}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma^{real}} \, \exp\left[-\frac{(y_i - \alpha^{real} - \beta^{real} x_i)^2}{2{\sigma^{real}}^2}\right]\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real}}
\end{eqnarray}
\]
Simulated data
From a model with true values \(\alpha^{true}=6,\beta^{true}=2,\sigma^{true}=3\), generate \((y_1,y_2,...,y_{100})\) with \((x_1,x_2,...,x_{100}) \sim Uniform(-10,10)\).
set.seed(5)
x_obs <- runif(100,-10,10)
y_obs <- 6 + 2*x_obs + rnorm(100,0,3)
d <- data.frame(x=x_obs,y=y_obs)
head(d)
## x y
## 1 -5.995711 -1.601676
## 2 3.704372 13.971922
## 3 8.337515 25.741100
## 4 -4.312011 -4.399526
## 5 -7.906997 -10.150597
## 6 4.021149 11.267439
library(ggplot2)
g_s <- ggplot(d,aes(x=x,y=y)) + geom_point() + ggtitle("observation")
plot(g_s)
Prior distribution
For the uniform priors, set \(a=-10000,b=10000\) for convenience.
density_c <- 1/(20000*20000*10000)
alpha_real <- seq(-10,10,0.1)
prior_a <- rep(density_c,length(alpha_real))
plot(alpha_real,prior_a)
beta_real <- seq(-10,10,0.1)
prior_b <- rep(density_c,length(beta_real))
plot(beta_real,prior_b)
sigma_real <- seq(0.1,10,0.1)
prior_s <- rep(density_c,length(sigma_real))
plot(sigma_real,prior_s)
prior_ab <- matrix(density_c,nrow=length(alpha_real),ncol=length(beta_real))
Viewed in two dimensions over \(\alpha,\beta\):
library(plotly)
fig <- plot_ly(x=~beta_real,y=~alpha_real, z=~prior_ab) %>% layout(scene=list(zaxis=list(nticks=4,range=c(0,1/2000000000000))))
fig <- fig %>% add_surface()
fig
Likelihood function
# likelihood function
likelihood_f <- function(alpha, beta, sigma){
  return(prod(dnorm(y_obs - alpha - beta*x_obs, mean = 0, sd = sigma)))
}
likelihoods <- array(0,dim=c(length(alpha_real),length(beta_real),length(sigma_real)))
kernels <- array(0,dim=c(length(alpha_real),length(beta_real),length(sigma_real)))
for (i in 1:length(alpha_real)){
  for (j in 1:length(beta_real)) {
    for (k in 1:length(sigma_real)) {
      likelihoods[i,j,k] <- likelihood_f(alpha_real[i],beta_real[j],sigma_real[k])
      kernels[i,j,k] <- likelihoods[i,j,k]*density_c
    }
  }
}
The "maximum" of the likelihood and the kernel, and the "parameter values attaining it" (the MAP estimate = maximum a posteriori estimate)
print(likelihoods[which.max(likelihoods)])
## [1] 2.205914e-108
print(kernels[which.max(kernels)])
## [1] 5.514785e-121
print(which(likelihoods==likelihoods[which.max(likelihoods)],arr.ind = TRUE))
## dim1 dim2 dim3
## [1,] 160 121 29
print(which(kernels==kernels[which.max(kernels)],arr.ind = TRUE))
## dim1 dim2 dim3
## [1,] 160 121 29
print(alpha_real[160])
## [1] 5.9
print(beta_real[121])
## [1] 2
print(sigma_real[29])
## [1] 2.9
map_idx <- which(likelihoods==likelihoods[which.max(likelihoods)],arr.ind = TRUE)
Plots sliced at the MAP estimate
d <- data.frame(alpha=alpha_real,density=kernels[,map_idx[2],map_idx[3]])
g_a <- ggplot(d,aes(x=alpha,y=density)) + geom_line() + ggtitle("alpha kernel")
plot(g_a)
d <- data.frame(beta=beta_real,density=kernels[map_idx[1],,map_idx[3]])
g_b <- ggplot(d,aes(x=beta,y=density)) + geom_line() + ggtitle("beta kernel")
plot(g_b)
d <- data.frame(sigma=sigma_real,density=kernels[map_idx[1],map_idx[2],])
g_s <- ggplot(d,aes(x=sigma,y=density)) + geom_line() + ggtitle("sigma kernel")
plot(g_s)
fig_2 <- plot_ly(x=~beta_real,y=~alpha_real,z=~kernels[ , ,map_idx[3]])
fig_2 <- fig_2 %>% add_surface()
fig_2
fig_3 <- plot_ly(x=~sigma_real,y=~beta_real,z=~kernels[map_idx[1], , ])
fig_3 <- fig_3 %>% add_surface()
fig_3
fig_4 <- plot_ly(x=~sigma_real,y=~alpha_real,z=~kernels[ ,map_idx[2], ])
fig_4 <- fig_4 %>% add_surface()
fig_4
Marginal posterior distributions (not normalized, so only up to proportionality)
md_a <- rep(0,length(alpha_real))
for (i in 1:length(alpha_real)){
  md_a[i] <- sum(kernels[i,,])
}
fig_5 <- plot_ly(x=~alpha_real,y=md_a)
fig_5
md_b <- rep(0,length(beta_real))
for (i in 1:length(beta_real)){
  md_b[i] <- sum(kernels[,i,])
}
fig_6 <- plot_ly(x=~beta_real,y=md_b)
fig_6
md_s <- rep(0,length(sigma_real))
for (i in 1:length(sigma_real)){
  md_s[i] <- sum(kernels[,,i])
}
fig_7 <- plot_ly(x=~sigma_real,y=md_s)
fig_7
print(alpha_real[which.max(md_a)])
## [1] 5.9
print(beta_real[which.max(md_b)])
## [1] 2
print(sigma_real[which.max(md_s)])
## [1] 2.9
md_ab <- matrix(0,nrow=length(alpha_real),ncol=length(beta_real))
for (i in 1:length(alpha_real)){
  for (j in 1:length(beta_real)){
    md_ab[i,j] <- sum(kernels[i,j,])
  }
}
fig_8 <- plot_ly(x=~beta_real,y=~alpha_real,z=~md_ab)
fig_8 <- add_surface(fig_8)
fig_8
Estimates by ordinary least squares
d <- data.frame(x=x_obs,y=y_obs)
g_s <- ggplot(d,aes(x=x,y=y)) + geom_point() + ggtitle("observation")
plot(g_s)
ols <- lm(y~x,data=d)
print(summary(ols))
##
## Call:
## lm(formula = y ~ x, data = d)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.1939 -1.5965 -0.0785 2.1349 7.6938
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.93850 0.29171 20.36 <2e-16 ***
## x 2.02294 0.04831 41.87 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.912 on 98 degrees of freedom
## Multiple R-squared: 0.9471, Adjusted R-squared: 0.9465
## F-statistic: 1753 on 1 and 98 DF, p-value: < 2.2e-16
print(sqrt(sum((ols$residuals)^2)/98))
## [1] 2.911704
library(rstan)
library(bayesplot)
#rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
Create the file "simple_reg_0507.stan" as follows:
data {
  int<lower=0> N;
  vector[N] x;
  vector[N] y;
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
model {
  y ~ normal(alpha + beta*x, sigma);
}
Regarding the model block: from \[
y_i = \alpha^{true} + \beta^{true} x_i + \epsilon_i \quad,\quad \epsilon_i \sim Normal\,(0,{\sigma^{true}}^2)\\
\] the model is rewritten as \[
y_i \sim Normal\,(\,\alpha^{true} + \beta^{true} x_i\,,\,{\sigma^{true}}^2\,)\\
\\
(\because \,\, y_i\,|\alpha^{true}\,,\beta^{true}\,,x_i \,\,\sim Normal\,(\,\alpha^{true} + \beta^{true} x_i\,,\,{\sigma^{true}}^2\,))
\] and coded in that form.
Also, since a "wide uniform distribution" is adopted as the prior, nothing is specified for it (Stan's default is then a flat prior over the declared support).
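The equivalence used in the model block (additive noise versus shifting the mean of the normal) can be confirmed by simulation: with the same seed, the two formulations produce the same draws, because rnorm merely shifts and scales standard normal variates. A standalone sketch using the document's true parameter values:

```r
# y = alpha + beta*x + eps, eps ~ Normal(0, sigma^2)  describes the same
# distribution as  y ~ Normal(alpha + beta*x, sigma^2), the form Stan uses.
alpha <- 6; beta <- 2; sigma <- 3
x <- seq(-10, 10, length.out = 100)

set.seed(1)
y_additive <- alpha + beta * x + rnorm(100, mean = 0, sd = sigma)

set.seed(1)
y_shifted  <- rnorm(100, mean = alpha + beta * x, sd = sigma)

all.equal(y_additive, y_shifted)   # TRUE
```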
Create the data list
data_list <- list(
N = 100,
y = y_obs,
x = x_obs
)
Run MCMC
mcmc_result <- stan(
file = "simple_reg_0507.stan",
data = data_list,
seed = 1,
chains = 4,
warmup = 1000,
iter = 2000,
thin = 1
)
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 9e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.9 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.045 seconds (Warm-up)
## Chain 1: 0.044 seconds (Sampling)
## Chain 1: 0.089 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 1.2e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0.047 seconds (Warm-up)
## Chain 2: 0.043 seconds (Sampling)
## Chain 2: 0.09 seconds (Total)
## Chain 2:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
## Chain 3:
## Chain 3: Gradient evaluation took 1.2e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3:
## Chain 3:
## Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 3:
## Chain 3: Elapsed Time: 0.045 seconds (Warm-up)
## Chain 3: 0.046 seconds (Sampling)
## Chain 3: 0.091 seconds (Total)
## Chain 3:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
## Chain 4:
## Chain 4: Gradient evaluation took 2.1e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4:
## Chain 4:
## Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 4:
## Chain 4: Elapsed Time: 0.043 seconds (Warm-up)
## Chain 4: 0.045 seconds (Sampling)
## Chain 4: 0.088 seconds (Total)
## Chain 4:
Results and extraction of the MCMC sample
print(mcmc_result, probs = c(0.025,0.5,0.975))
## Inference for Stan model: anon_model.
## 4 chains, each with iter=2000; warmup=1000; thin=1;
## post-warmup draws per chain=1000, total post-warmup draws=4000.
##
## mean se_mean sd 2.5% 50% 97.5% n_eff Rhat
## alpha 5.94 0.00 0.29 5.36 5.94 6.52 3901 1
## beta 2.02 0.00 0.05 1.92 2.02 2.12 3975 1
## sigma 2.95 0.00 0.22 2.55 2.93 3.43 3747 1
## lp__ -156.36 0.03 1.29 -159.72 -156.01 -154.92 1842 1
##
## Samples were drawn using NUTS(diag_e) at Sun May 16 04:38:50 2021.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
mcmc_sample <- rstan::extract(mcmc_result,permuted=FALSE)
Trace plots and posterior distributions
mcmc_combo(
mcmc_sample,
pars = c("alpha","beta","sigma")
)
Prediction with the ordinary simple regression model \[
\hat{y_{next}} = \hat{\alpha} + \hat{\beta}x_{next}
\] where \(\hat{\alpha} , \hat{\beta}\) are point estimates of \(\alpha^{true} , \beta^{true}\), and \(x_{next}\) is the value of \(x\) used to predict \(y_{next}\)
With Bayesian estimation, too, such point prediction is possible by using the posterior median or posterior mean as a point estimate
Point prediction with the EAP (sample-mean) estimates
x_next = c(-9.5,8,12,1,5.5)
alpha_hat <- mean(mcmc_sample[,,"alpha"])
beta_hat <- mean(mcmc_sample[,,"beta"])
sigma_hat <- mean(mcmc_sample[,,"sigma"])
y_next_hatp <- alpha_hat + beta_hat*x_next
fig_10 <- plot_ly(x=x_obs,y=y_obs,name='observation',type="scatter",mode ='markers')
fig_10 <- fig_10 %>% add_trace(x=x_next,y=y_next_hatp,name='point_predict',type="scatter",mode ='lines+markers')
fig_10
Moreover, since Bayesians put a distribution on the error term, a "predictive distribution" is available → the conditional predictive distribution (Conditional Predictive Distribution)
From it, a "95% prediction interval (Prediction Interval)" can be obtained
The expectation of the conditional predictive distribution is \[
E\left[\,\hat{y_{next}}\,|\,\hat{\alpha},\hat{\beta},\hat{\sigma}\,\right] = \hat{\alpha} + \hat{\beta}x_{next}
\]
The conditional predictive distribution is a normal distribution with mean \(\hat{\alpha}+\hat{\beta}x_{next}\) and variance \(\hat{\sigma}^2\)
That is, under the conditional predictive distribution, the density that \(\hat{y_{next}}\) takes a particular value \(y^{*}\) is \[
f(\hat{y_{next}} = y^{*}| \,\hat{\alpha},\hat{\beta},\hat{\sigma},x_{next}\,) = \frac{1}{\sqrt{2\pi}\hat{\sigma}} \, \exp\left[-\frac{\,\{y^{*} - (\hat{\alpha} + \hat{\beta}x_{next}) \}^2}{2{\hat{\sigma}^2}}\right]
\]
x_next = c(-9.5,8,12,1,5.5)
upp_96pi = rep(0,5)
low_96pi = rep(0,5)
for (i in 1:length(x_next)){
  yhat_range <- seq(alpha_hat+beta_hat*x_next[i]-6,alpha_hat+beta_hat*x_next[i]+6,0.1)
  yhat_prob <- dnorm(x=yhat_range,mean=alpha_hat+x_next[i]*beta_hat,sd=sigma_hat)
  low_96pi[i] <- qnorm(0.025,mean=alpha_hat+x_next[i]*beta_hat,sd=sigma_hat) # 2.5% quantile for the 95% PI
  upp_96pi[i] <- qnorm(0.975,mean=alpha_hat+x_next[i]*beta_hat,sd=sigma_hat) # 97.5% quantile for the 95% PI
fig <- plot_ly(x=yhat_range,y=~yhat_prob) %>%
layout(title = paste('x_next =',as.character(x_next[i]),
',95%PI=[',as.character(round(low_96pi[i],digits = 4)),
as.character(round(upp_96pi[i],digits = 4)),'],',
'PP =',as.character(round(alpha_hat+x_next[i]*beta_hat,digits = 4))))
}
fig
## No trace type specified:
## Based on info supplied, a 'scatter' trace seems appropriate.
## Read more about this trace type -> https://plotly.com/r/reference/#scatter
## No scatter mode specifed:
## Setting the mode to markers
## Read more about this attribute -> https://plotly.com/r/reference/#scatter-mode
fig_12 <- fig_10 %>%
  add_trace(x=x_next,y=low_96pi,name='lower_95PI',type="scatter",mode ='lines+markers',line=list(color = 'yellow', width=2, dash='dot')) %>%
  add_trace(x=x_next,y=upp_96pi,name='upper_95PI',type="scatter",mode ='lines+markers',line=list(color = 'yellow', width=2, dash='dot')) %>%
layout(title= 'Point Prediction and 95% Prediction Interval')
fig_12
Not only point estimates but entire distributions of the parameters are available → we also want to use this distributional information for prediction
A predictive distribution derived while accounting for the parameter distributions → the posterior predictive distribution (Posterior Predictive Distribution)
Multiply (the density that \(\hat{y_{next}}\) takes \(y^{*}\) under realized values \(\alpha^{real},\beta^{real},\sigma^{real}\)) × (the density that those \(\alpha^{real},\beta^{real},\sigma^{real}\) are realized (= the posterior density)), and sum over all \(\alpha^{real},\beta^{real},\sigma^{real}\)
→ this should yield the density that \(\hat{y_{next}}\) takes \(y^{*}\)
\[
\begin{eqnarray}
f(\hat{y_{next}} = y^{*}|Data,x_{next}) &=& \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \underset{\text{conditional density that }\hat{y_{next}} = y^{*}}{\underline{f(\hat{y_{next}} = y^{*}| \,\alpha^{real},\beta^{real},\sigma^{real}\,,x_{next})}}\cdot
\underset{\text{posterior density}}{\underline{f(\,\alpha = \alpha^{real},\,\beta = \beta^{real},\,\sigma = \sigma^{real}|\,Data\,)}}\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real}\\
\\
\\
&=& \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\sigma^{real}} \, \exp\left[-\frac{\,(y^{*}-\alpha^{real}-\beta^{real}x_{next})^2}{2{{\sigma^{real}}^2}}\right]\cdot
\frac{\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma^{real}} \, \exp\left[-\frac{(y_i - \alpha^{real} - \beta^{real} x_i)^2}{2{{\sigma^{real}}^2}}\right]}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma^{real}} \, \exp\left[-\frac{(y_i - \alpha^{real} - \beta^{real} x_i)^2}{2{{\sigma^{real}}^2}}\right]\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real}}\,\,d\alpha^{real}\,d\beta^{real}\,d\sigma^{real}
\end{eqnarray}
\]
→ This is far too complicated, so the posterior predictive distribution is likewise estimated by sampling random draws
Procedure for estimating the posterior predictive distribution
MCMC has already produced 4,000 sets of \((\alpha^{real},\beta^{real},\sigma^{real})\) (the exact number depends on iter and chains)
→ (corresponds to approximating the posterior density function)
For the prediction input \(x_{next}\), compute \(\alpha^{real} + \beta^{real}x_{next}\) for each of the 4,000 sets (\(x_{next}\) held fixed)
Using a normal random-number generator, draw one value per set from \(Normal(\alpha^{real} + \beta^{real}x_{next},{\sigma^{real}}^2)\) and take it as the prediction \(\hat{y_{next}}\)
→ (corresponds to approximating the conditional density function)
Draw a histogram of the resulting 4,000 predictions \(\hat{y_{next}}\) and take it as the predictive distribution of \(y_{next}\) given \(x_{next}\)
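These steps can also be carried out by hand in R. The sketch below fakes the 4,000 posterior draws with rnorm so that it runs standalone; in the actual analysis the draws would come from rstan::extract, and the stand-in centers and spreads are only rough approximations of the posterior summaries above:

```r
# Posterior predictive sampling by hand: one rnorm draw per posterior draw.
# The parameter draws below are FAKE stand-ins for real MCMC output.
set.seed(1)
n_draws    <- 4000
alpha_real <- rnorm(n_draws, 6, 0.3)       # stand-in posterior draws of alpha
beta_real  <- rnorm(n_draws, 2, 0.05)      # stand-in posterior draws of beta
sigma_real <- abs(rnorm(n_draws, 3, 0.2))  # stand-in posterior draws of sigma

x_next <- 4
# one predictive draw per parameter set: Normal(alpha + beta*x_next, sigma^2)
y_next_hat <- rnorm(n_draws,
                    mean = alpha_real + beta_real * x_next,
                    sd   = sigma_real)

quantile(y_next_hat, c(0.025, 0.5, 0.975))  # 95% prediction interval and median
# hist(y_next_hat)  # the approximate posterior predictive distribution
```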
To actually carry out this procedure, the ".stan" file must be changed as follows:
data {
  int<lower=0> N;
  vector[N] x;
  vector[N] y;
  int N_p;            // number of data points (x) to predict
  vector[N_p] x_next; // vector of x values used for prediction
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
model {
  y ~ normal(alpha + beta*x, sigma);
}
generated quantities{
  vector[N_p] y_next_hat; // predicted values of y
  y_next_hat = normal_rng(alpha + beta*x_next, sigma);
  // draw random numbers from Normal(alpha + beta*x_next, sigma)
  // and take them as the predicted values of y
}
(Save the modified file under the new name "simple_reg_pred_0507.stan".)
About the generated quantities block
It defines objects that are not needed for estimating the model parameters but for which random draws are wanted for some other purpose, such as the model's predicted values.
The data list
data_list_pred <- list(
N = 100,
y = y_obs,
x = x_obs,
N_p = 5,
x_next = c(4,8,12,1,5.5)
)
Run MCMC
mcmc_result_pred <- stan(
file = "simple_reg_pred_0507.stan",
data = data_list_pred,
seed = 1
)
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 6.2e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.62 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.041 seconds (Warm-up)
## Chain 1: 0.039 seconds (Sampling)
## Chain 1: 0.08 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 1.1e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0.041 seconds (Warm-up)
## Chain 2: 0.041 seconds (Sampling)
## Chain 2: 0.082 seconds (Total)
## Chain 2:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
## Chain 3:
## Chain 3: Gradient evaluation took 1.2e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3:
## Chain 3:
## Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 3:
## Chain 3: Elapsed Time: 0.042 seconds (Warm-up)
## Chain 3: 0.037 seconds (Sampling)
## Chain 3: 0.079 seconds (Total)
## Chain 3:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
## Chain 4:
## Chain 4: Gradient evaluation took 1.1e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4:
## Chain 4:
## Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 4:
## Chain 4: Elapsed Time: 0.04 seconds (Warm-up)
## Chain 4: 0.044 seconds (Sampling)
## Chain 4: 0.084 seconds (Total)
## Chain 4:
Results (95% prediction intervals)
mcmc_intervals(
mcmc_result_pred,
regex_pars = c("y_next_hat."),
prob=0.8,
prob_outer=0.95
)
Results (predictive distributions)
mcmc_areas(
mcmc_result_pred,
pars = c("y_next_hat[1]","y_next_hat[5]"),
prob=0.6,
prob_outer=0.99
)
library(brms)
df <- data.frame(x_obs,y_obs)
simple_lm_brms <- brm(
formula = y_obs~x_obs,
family = gaussian(link="identity"), # mean is alpha + beta*x; normal error term
data = df,
seed=1
)
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 4.5e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.45 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.041 seconds (Warm-up)
## Chain 1: 0.024 seconds (Sampling)
## Chain 1: 0.065 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 9e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0.028 seconds (Warm-up)
## Chain 2: 0.034 seconds (Sampling)
## Chain 2: 0.062 seconds (Total)
## Chain 2:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
## Chain 3:
## Chain 3: Gradient evaluation took 9e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3:
## Chain 3:
## Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 3:
## Chain 3: Elapsed Time: 0.034 seconds (Warm-up)
## Chain 3: 0.028 seconds (Sampling)
## Chain 3: 0.062 seconds (Total)
## Chain 3:
##
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
## Chain 4:
## Chain 4: Gradient evaluation took 1.1e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4:
## Chain 4:
## Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 4:
## Chain 4: Elapsed Time: 0.042 seconds (Warm-up)
## Chain 4: 0.028 seconds (Sampling)
## Chain 4: 0.07 seconds (Total)
## Chain 4:
plot(simple_lm_brms)
#pred
new_df <- data.frame(x_obs = 10.5)
set.seed(1)
predict(simple_lm_brms,new_df)
## Estimate Est.Error Q2.5 Q97.5
## [1,] 27.18888 3.119873 21.004 33.40597
Toyoda, Hideki (ed.) (2015), Basics of Bayesian Statistics: A Practical Introduction via Hamiltonian Monte Carlo (in Japanese), Asakura Shoten, Chapter 3.
Baba, Shinya (2019), Introduction to Data Analysis with Bayesian Statistical Modeling Using R and Stan (Practical Data Science Series, in Japanese), Kodansha, Part 3, Chapters 2 and 3.